Supporting Analysis of Dimensionality Reduction Results with Contrastive Learning
Authors
Abstract
Similar resources
Dimensionality Reduction and Learning
The theme of these two lectures is that for L2 methods we need not work in infinite-dimensional spaces. In particular, we can non-adaptively find and work in a low-dimensional space and achieve results that are about as good. These results question the need to explicitly work in infinite (or high) dimensional spaces for L2 methods. In contrast, for sparsity-based methods (including L1 regularization), ...
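The low-dimensional claim in this abstract rests on a standard fact: a non-adaptive (data-independent) random projection approximately preserves pairwise L2 distances. Below is a minimal NumPy sketch of that Johnson-Lindenstrauss-style check; the matrix sizes and target dimension are arbitrary illustrative choices, not values from the cited lectures.

```python
import numpy as np

rng = np.random.default_rng(0)

# High-dimensional data: n points in d dimensions (illustrative sizes).
n, d, k = 200, 10_000, 500
X = rng.standard_normal((n, d))

# Non-adaptive (data-independent) Gaussian random projection to k dimensions,
# scaled so that squared L2 distances are preserved in expectation.
R = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ R

def pairwise_sq_dists(Z):
    """Squared Euclidean distances between all pairs of rows."""
    sq = (Z ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2 * Z @ Z.T

D_hi = pairwise_sq_dists(X)
D_lo = pairwise_sq_dists(Y)

# Off-diagonal distortion ratios concentrate around 1.
mask = ~np.eye(n, dtype=bool)
ratios = D_lo[mask] / D_hi[mask]
print(f"distance ratio: mean={ratios.mean():.3f}, std={ratios.std():.3f}")
```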
Supporting regenerative medicine by integrative dimensionality reduction.
OBJECTIVE The assessment of the developmental potential of stem cells is a crucial step towards their clinical application in regenerative medicine. It has been demonstrated that genome-wide expression profiles can predict the cellular differentiation stage by means of dimensionality reduction methods. Here we show that these techniques can be further strengthened to support decision making wit...
Dimensionality reduction for supervised learning
Outline: Motivation (supervised learning, high dimensionality); Dimensionality reduction (principal component analysis, random projections); Experimental setup (algorithms and datasets, procedure); Results; Discussion; References. ...
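As a concrete companion to the outline above, here is a short scikit-learn sketch comparing the two reduction techniques it names, principal component analysis and random projections, as preprocessing for a supervised classifier. The dataset, target dimensionality, and classifier are assumptions made for illustration, not the experimental setup of the cited work.

```python
# Compare PCA and a Gaussian random projection as preprocessing steps for a
# supervised learner. All concrete choices here are illustrative.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection

X, y = load_digits(return_X_y=True)  # 64-dimensional inputs

for name, reducer in [
    ("PCA", PCA(n_components=20)),
    ("Random projection", GaussianRandomProjection(n_components=20, random_state=0)),
]:
    clf = make_pipeline(reducer, LogisticRegression(max_iter=2000))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: accuracy = {scores.mean():.3f}")
```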
Relevance Learning for Dimensionality Reduction
Nonlinear dimensionality reduction (NLDR) techniques offer powerful data visualization schemes that capture nonlinear effects in the data at the cost of decreased interpretability of the projection: unlike for linear counterparts such as principal component analysis, the relevance of the original feature dimensions for the NLDR projection is not clear. In this contribution we propose relevance ...
Transfer Learning via Dimensionality Reduction
Transfer learning addresses the problem of how to utilize abundant labeled data in a source domain to solve related but different problems in a target domain, even when the training and testing problems have different distributions or features. In this paper, we consider transfer learning via dimensionality reduction. To solve this problem, we learn a low-dimensional latent feature space where ...
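The cited abstract is truncated, so the sketch below illustrates only the general idea of transfer via a shared low-dimensional latent space, not the paper's specific algorithm: fit one subspace on source and target inputs together, train on the labeled source data in that space, and apply the model to the target domain. All data, shapes, and model choices are hypothetical.

```python
# Generic shared-subspace transfer sketch (illustrative, not the cited method).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical source (labeled) and target (unlabeled) feature matrices.
X_src = rng.standard_normal((300, 50))
y_src = (X_src[:, 0] + 0.1 * rng.standard_normal(300) > 0).astype(int)
X_tgt = rng.standard_normal((100, 50)) + 0.5  # shifted target distribution

# Fit one latent space on both domains so it captures shared structure.
latent = PCA(n_components=10).fit(np.vstack([X_src, X_tgt]))

# Train on projected source labels, then predict in the target domain.
clf = LogisticRegression(max_iter=1000).fit(latent.transform(X_src), y_src)
y_tgt_pred = clf.predict(latent.transform(X_tgt))
print(y_tgt_pred[:10])
```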
Journal
Journal title: IEEE Transactions on Visualization and Computer Graphics
Year: 2020
ISSN: 1077-2626, 1941-0506, 2160-9306
DOI: 10.1109/tvcg.2019.2934251